The Backpropagation Algorithm Functions for the Multilayer Perceptron

Authors

  • MARIUS-CONSTANTIN POPESCU
  • VALENTINA BALAS
  • ONISIFOR OLARU
  • NIKOS MASTORAKIS
Abstract

Attempts to solve linearly inseparable problems have led to variations in the number of layers of neurons and in the activation functions used. The backpropagation algorithm is the best-known and most widely used supervised learning algorithm. It is also called the generalized delta algorithm because it extends the way the ADALINE network is trained; it is based on minimizing the difference between the desired output and the actual output by gradient descent (the gradient tells us how a function varies in different directions). Training a multilayer perceptron is often quite slow, requiring thousands or tens of thousands of epochs for complex problems. The best-known methods for accelerating learning are the momentum method and the use of a variable learning rate.

Key-Words: Backpropagation algorithm, Gradient method, Multilayer perceptron.
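As a concrete illustration of the abstract's description, the following is a minimal sketch, not the authors' implementation, of backpropagation with momentum for a one-hidden-layer perceptron with sigmoid units. The function name train_mlp, the bias-folding convention, and every hyperparameter value (eta, alpha, n_hidden, epochs) are assumptions made here for illustration only.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, T, n_hidden=4, eta=0.5, alpha=0.9, epochs=10000, seed=0):
    """Batch backpropagation with momentum for a one-hidden-layer perceptron.

    X : (n_samples, n_inputs) inputs, T : (n_samples, n_outputs) targets in [0, 1].
    eta is the learning rate, alpha the momentum coefficient.
    """
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    # Weight matrices; the bias is folded in by appending a constant 1 to each layer's input.
    W1 = rng.uniform(-0.5, 0.5, (n_in + 1, n_hidden))
    W2 = rng.uniform(-0.5, 0.5, (n_hidden + 1, n_out))
    dW1_prev = np.zeros_like(W1)
    dW2_prev = np.zeros_like(W2)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])

    for _ in range(epochs):
        # Forward pass.
        H = sigmoid(Xb @ W1)
        Hb = np.hstack([H, np.ones((H.shape[0], 1))])
        Y = sigmoid(Hb @ W2)
        # Backward pass: generalized delta rule with sigmoid derivatives.
        delta_out = (T - Y) * Y * (1.0 - Y)
        delta_hid = (delta_out @ W2[:-1].T) * H * (1.0 - H)
        # Gradient-descent steps plus a momentum term (a fraction of the previous update).
        dW2 = eta * Hb.T @ delta_out + alpha * dW2_prev
        dW1 = eta * Xb.T @ delta_hid + alpha * dW1_prev
        W2 += dW2
        W1 += dW1
        dW1_prev, dW2_prev = dW1, dW2
    return W1, W2

The momentum term alpha * dW_prev reuses part of the previous weight change, which is the usual way the momentum acceleration mentioned in the abstract is realized; a variable learning rate would instead adjust eta between epochs.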


Similar articles

Are Rosenblatt multilayer perceptrons more powerful than sigmoidal multilayer perceptrons? From a counter example to a general result

In the eighties, the problem of the lack of an efficient algorithm to train multilayer Rosenblatt perceptrons was solved by sigmoidal neural networks and backpropagation. But should we still try to find an efficient algorithm to train multilayer hard-limit neural networks, a task known to be an NP-complete problem? In this work we show that this would not be a waste of time by means of a counter ex...


Fast training of multilayer perceptrons

Training a multilayer perceptron by an error backpropagation algorithm is slow and uncertain. This paper describes a new approach which is much faster and more certain than error backpropagation. The proposed approach is based on combined iterative and direct solution methods. In this approach, we use an inverse transformation for linearization of nonlinear output activation functions, direct soluti...


4. Multilayer perceptrons and back-propagation

Multilayer feed-forward networks, or multilayer perceptrons (MLPs), have one or several "hidden" layers of nodes. This implies that they have two or more layers of weights. The limitations of simple perceptrons do not apply to MLPs. In fact, as we will see later, a network with just one hidden layer can represent any Boolean function (including the XOR which is, as we saw, not linearly separab...
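To make the XOR remark above concrete, here is a hedged usage sketch that fits the XOR truth table using the hypothetical train_mlp function sketched under the abstract; the hidden-layer size, learning rate, momentum, and epoch count are arbitrary choices, and a different random initialization may be needed if training settles in a poor local minimum.

import numpy as np

# XOR truth table: not linearly separable, yet representable with one hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1, W2 = train_mlp(X, T, n_hidden=4, eta=0.5, alpha=0.9, epochs=20000)

# Forward pass with the trained weights (same bias convention as the sketch).
Xb = np.hstack([X, np.ones((4, 1))])
H = 1.0 / (1.0 + np.exp(-(Xb @ W1)))
Hb = np.hstack([H, np.ones((4, 1))])
Y = 1.0 / (1.0 + np.exp(-(Hb @ W2)))
print(np.round(Y, 2))  # expected to approach [[0], [1], [1], [0]]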


On the Pseudo Multilayer Learning of Backpropagation

Rosenblatt's convergence theorem for the simple perceptron initiated much excitement about iterative weight-modifying neural networks. However, this convergence only holds for the class of linearly separable functions, which is vanishingly small compared to arbitrary functions. With multilayer networks of nonlinear units it is possible, though not guaranteed, to solve arbitrary functions. Backp...


Speed-up of error backpropagation algorithm with class-selective relevance

Selective attention learning is proposed to improve the speed of the error backpropagation algorithm for fast speaker adaptation. Class-selective relevance for measuring the importance of a hidden node in a multilayer perceptron is employed to selectively update the weights of the network, thereby reducing the computational cost of learning.



Journal title:

Volume:   Issue:

Pages:  -

Publication date: 2009